Fundamentals of Computer Graphics, Image Processing, and Vision

Exercise 2: Epipolar Geometry

In this exercise you will explore estimating the relationship between two camera views using the epipolar geometry concepts learned in the lectures.

NOTE: When submitting the Colab notebook, please remove any unnecessary/debug cells. That is, your notebook should be clean to make it easy for the grader to follow.


Student 1:
Name: Or Daniel ID: 208391938

Student 2:
Name: Nofar Haim ID: 312495328


In [79]:
import numpy as np
import cv2
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

%matplotlib inline
In [82]:
# utility function for reading the 2d points from the given txt files
def load_file(name):
    lst = []
    f = open(name, 'r')
    for line in f:
        lst.append(line.strip().split())
    f.close()
    return np.array(lst, dtype=np.float32)
In [83]:
file = !wget https://www.dropbox.com/sh/fhkb5ywnlw3uh38/AAAVZGyrREQtLqi1tQAisPXXa?dl=0
In [86]:
# parse the name of the downloaded archive from the wget output log
s = file[-2].split()[5][1:-1]
!unzip "$s" -d input

Part 1: Fundamental Matrix Estimation

Recall that we are interested in solving the correspondence problem. Specifically, given two images taken from cameras at different positions, how do we match a point in the first image to a point in the second image?

We have seen that it is possible to learn a mapping of points in one image to lines in another image using the Fundamental Matrix.

To do so, we have provided you with two scenes.

  • In the first scene, sceneA, we have provided you with corresponding point locations listed in sceneA-pts2d-1.txt and sceneA-pts2d-2.txt.
  • In the second scene, sceneB, we ask you to compute the corresponding points yourself. Please use the provided utility script tag_image.py to do so.

Hint: When solving for the fundamental matrix $F$, think about how many corresponding points we need. What happens if we have more pairs than required?
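
For reference (a standard counting argument): each correspondence $(p_a, p_b)$ gives one linear constraint $p_b^T F p_a = 0$, which expands to \begin{equation} x_b x_a f_{11} + x_b y_a f_{12} + x_b f_{13} + y_b x_a f_{21} + y_b y_a f_{22} + y_b f_{23} + x_a f_{31} + y_a f_{32} + f_{33} = 0 \end{equation} Since $F$ is only defined up to scale (e.g., by fixing $f_{33} = 1$), 8 unknowns remain, so 8 point pairs suffice; with more pairs the system becomes overdetermined and can be solved by least squares.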

Question 1

Write a function that computes the Fundamental Matrix $F$ that satisfies the epipolar constraints defined by the sets of corresponding points using least squares.

In [87]:
# Compute the Fundamental Matrix for Scene A
pts_a = load_file('input/sceneA/sceneA-pts2d-1.txt')
pts_b = load_file('input/sceneA/sceneA-pts2d-2.txt')

####################
# Estimate the Fundamental Matrix from corresponding points using least squares.
# Since F is only defined up to scale, we fix F[2,2] = 1, leaving 8 unknowns;
# each pair of corresponding points then contributes one linear equation.
def estimateF(pts_a, pts_b):
    num_rows = pts_a.shape[0]

    # move to homogeneous coordinates
    onelist = [1 for i in range(0, num_rows)]
    pts_a_homog = np.concatenate((pts_a, np.array(onelist)[:, None]), axis=1)
    pts_b_homog = np.concatenate((pts_b, np.array(onelist)[:, None]), axis=1)

    # each row of A holds the first 8 coefficients of the constraint p_b^T F p_a = 0
    A = np.empty((num_rows, 8), dtype=np.float64)
    for i in range(0, num_rows):
        A[i] = [pts_b_homog[i][0] * pts_a_homog[i][k] for k in range(0, 3)] + \
               [pts_b_homog[i][1] * pts_a_homog[i][k] for k in range(0, 3)] + \
               [pts_b_homog[i][2] * pts_a_homog[i][k] for k in range(0, 2)]

    # the ninth term (F[2,2] * 1) moves to the right-hand side
    b = -np.ones(num_rows, dtype=np.float64)

    x = np.linalg.lstsq(A, b, rcond=None)[0]
    x = np.append(x, [1])
    F = np.reshape(x, (3, 3))
    return F
####################

F_sceneA = estimateF(pts_a, pts_b)
print(f'Scene A: Fundamental Matrix with Rank = 3: \n {F_sceneA} \n')
Scene A: Fundamental Matrix with Rank = 3: 
 [[ 2.67713722e-07 -2.57763422e-06  1.12898577e-03]
 [ 7.43898031e-06  3.90196194e-06  7.31449001e-03]
 [-2.43366396e-03 -1.14515247e-02  1.00000000e+00]] 

Recall that $F$ is only known up to a scale factor. To make sure that your code is correct, if you solved for $F$ correctly, you should get a matrix that is a scaled equivalent to the following for sceneA:

 F =  [[ 2.67713722e-07 -2.57763422e-06  1.12898577e-03]
       [ 7.43898031e-06  3.90196194e-06  7.31449001e-03]
       [-2.43366396e-03 -1.14515247e-02  1.00000000e+00]] 
In [88]:
# Compute the Fundamental Matrix for Scene B
####################
B_pts_a = load_file('input/sceneB/pts1.txt')
B_pts_b = load_file('input/sceneB/pts2.txt')
####################
F_sceneB = estimateF(B_pts_a, B_pts_b)
print(f'Scene B: Fundamental Matrix with Rank = 3: \n {F_sceneB} \n')
Scene B: Fundamental Matrix with Rank = 3: 
 [[-9.90461901e-07 -1.54289669e-05  4.05658632e-03]
 [ 2.05963208e-05  4.33956461e-06 -4.10851641e-02]
 [-4.00746059e-03  3.56446173e-02  1.00000000e+00]] 

Question: Notice that the least squares estimate of $F$ is of full rank. However, the Fundamental Matrix should be a matrix of rank $2$. Why is the fundamental matrix of rank $2$? (We require you to show only the upper bound. That is, that $F$ is not of full rank).

Answer: See our answer in PDF file


Question 2

From the proof you provided above, $F$ should be of rank 2. Therefore, we must reduce the rank of the estimated $F$. To do so, we can decompose $F$ using singular value decomposition (SVD) into the matrices $U\Sigma V^T = F$. We can then estimate a rank 2 matrix by setting the smallest singular value in $\Sigma$ to zero, thus generating $\Sigma'$. The Fundamental Matrix is then easily calculated as $F = U\Sigma' V^T$.

Compute the fundamental matrix $F$ of rank 2 using SVD as described above.

In [90]:
# Compute the Fundamental Matrix of Rank 2 for Scene A
####################
def rank2F(f):
    u, s, vh = np.linalg.svd(f, full_matrices=True)
    # zero out the smallest singular value to enforce rank 2
    s[2] = 0
    # u * s scales the columns of u by the singular values, i.e. U @ diag(s')
    f2 = np.dot(u * s, vh)
    return f2
####################
F_sceneA_rank_2 = rank2F(F_sceneA)
print(f'Scene A: Fundamental Matrix with Rank = 2: \n {F_sceneA_rank_2} \n')
Scene A: Fundamental Matrix with Rank = 2: 
 [[ 2.35783095e-07 -2.56843936e-06  1.12898579e-03]
 [ 7.44275393e-06  3.90087527e-06  7.31449001e-03]
 [-2.43366395e-03 -1.14515247e-02  1.00000000e+00]] 

In [10]:
# Compute the Fundamental Matrix of Rank 2 for Scene B
####################
F_sceneB_rank_2 = rank2F(F_sceneB)
print(f'Scene B: Fundamental Matrix with Rank = 2: \n {F_sceneB_rank_2} \n')
####################
Scene B: Fundamental Matrix with Rank = 2: 
 [[-5.71494432e-07 -1.53878905e-05  4.05658654e-03]
 [ 2.06419549e-05  4.34403868e-06 -4.10851641e-02]
 [-4.00746041e-03  3.56446173e-02  1.00000000e+00]] 


Question 3

Given our fundamental matrix $F$ of rank 2, we can now compute the epipolar line $\ell_b$ in image $B$ corresponding to a point $p_a$ in image $A$: $\ell_b = Fp_a$

A similar equation can be used for computing the epipolar lines in image $A$: $\ell_a = F^T p_b$.

You are now tasked with drawing these epipolar lines.

Observe that drawing the epipolar lines on the image plane is not trivial. This is because the lines $\ell_i$ are defined in homogeneous coordinates whereas the standard line function takes two input points.

In order to find two such points, we can find the intersection of a given line $\ell_i$ with the image boundaries.

To do so, we need to compute two things:

  • The line $\ell_L$ corresponding to the left-hand side of the image.
  • The line $\ell_R$ corresponding to the right-hand side of the image.

Once we have these lines, we can compute the intersection point $P_{i,L}$ between $\ell_i$ and $\ell_L$ and the intersection point $P_{i,R}$ between $\ell_i$ and $\ell_R$.

Once we have these two points, we can compute the line running through the points $P_{i,L}$ and $P_{i,R}$.


To summarize, in this question you are required to compute the following:

  1. The lines corresponding to the left-hand and right-hand boundaries of the image.
  2. The intersection point between the image boundaries and the given epipolar line.

After computing these lines, you should draw the estimated epipolar lines on each of the two images. For each scene, please display the two images side-by-side with the epipolar lines drawn on each.


Hint: The intersection between two lines $l_1$ and $l_2$ is a point $p$ in homogeneous coordinates s.t. $p\cdot l_1=0$ and $p\cdot l_2=0$; that is, a point $p$ which is perpendicular to both $l_1$ and $l_2$.
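
A minimal sketch of this idea (optional; intersect_lines is an assumed helper name, relying on the numpy import at the top of the notebook):

    # intersect two homogeneous lines: their cross product is orthogonal to both
    def intersect_lines(l1, l2):
        p = np.cross(l1, l2)
        return p / p[2]  # de-homogenize (assumes the lines are not parallel, i.e. p[2] != 0)

For example, intersect_lines(lb_i, np.array([1, 0, 0])) would give the crossing of an epipolar line lb_i with the left image border $x = 0$.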

In [91]:
# Draw the epipolar lines on the images of Scene A and display + save the images side-by-side
####################
def computeEpipolarLines(pts_a, pts_b, f, imagePaths):
    num_rows = pts_a.shape[0]

    # use homogeneous coordinates
    onelist = [1 for i in range(0, num_rows)]
    pts_a_homog = np.concatenate((pts_a, np.array(onelist)[:, None]), axis=1)
    pts_b_homog = np.concatenate((pts_b, np.array(onelist)[:, None]), axis=1)

    # imagePaths[0] is the second image (B), imagePaths[1] is the first (A)
    imageB = cv2.imread(imagePaths[0])
    imageA = cv2.imread(imagePaths[1])

    height, width, channels = imageB.shape

    # the left and right image boundaries as homogeneous lines (x = 0 and x = width)
    lL = np.array([1, 0, 0])
    lR = np.array([-1, 0, width])

    for i in range(0, num_rows):
        # epipolar line in image B corresponding to point i in image A
        lb_i = np.dot(f, pts_a_homog[i])
        # its intersections with the left (x = 0) and right (x = width) boundaries
        pl = [0, -lb_i[2] / lb_i[1]]
        pr = [width, (-lb_i[2] - (lb_i[0] * width)) / lb_i[1]]
        cv2.line(imageB, (int(pl[0]), int(pl[1])), (int(pr[0]), int(pr[1])), (0, 255, 0), 1)
        # epipolar line in image A corresponding to point i in image B
        la_i = np.dot(f.T, pts_b_homog[i])
        pl2 = [0, -la_i[2] / la_i[1]]
        pr2 = [width, (-la_i[2] - (la_i[0] * width)) / la_i[1]]
        cv2.line(imageA, (int(pl2[0]), int(pl2[1])), (int(pr2[0]), int(pr2[1])), (0, 255, 0), 1)
    return (imageA, imageB)

def showPlotRGB(img1 ,img2):
    plt.figure(figsize=(16, 16))
    plt.subplot(1, 2, 1)
    plt.imshow(cv2.cvtColor(img1, cv2.COLOR_BGR2RGB))
    plt.subplot(1, 2, 2)
    plt.imshow(cv2.cvtColor(img2, cv2.COLOR_BGR2RGB))
    plt.show()

imagePathsA = ['input/sceneA/sceneA-im-2.png', 'input/sceneA/sceneA-im-1.png']
img1A, img2A = computeEpipolarLines(pts_a, pts_b, F_sceneA_rank_2, imagePathsA)
showPlotRGB(img1A, img2A)
####################
In [92]:
# Draw the epipolar lines on the images of Scene B and display + save the images side-by-side
####################
imagePathsB = ['input/sceneB/sceneB-im-2.png', 'input/sceneB/sceneB-im-1.png']
img1B, img2B = computeEpipolarLines(B_pts_a, B_pts_b, F_sceneB_rank_2, imagePathsB)
showPlotRGB(img1B, img2B)
####################

Question 4: BONUS (5-10pts)

Take a look at the results of the last section. You may notice that the epipolar lines are not exact.

The problem here is that the point coordinates are large and biased (far from zero) compared to the other coefficients of the linear system, which makes the least-squares estimate numerically unstable. To fix this, we can normalize the points.

Specifically, we want to construct a transformation $T$ that will make the mean of the points $0$ and scale the points to magnitude $1$. In other words, we wish to find: $$\begin{pmatrix} {u'} \\ {v'} \\ {1} \end{pmatrix} = \begin{pmatrix} {s} & {0} & {0} \\ {0} & {s} & {0} \\ {0} & {0} & {1} \end{pmatrix}\begin{pmatrix} {1} & {0} & {-c_u} \\ {0} & {1} & {-c_v} \\ {0} & {0} & {1} \end{pmatrix}\begin{pmatrix} {u} \\ {v} \\ {1} \end{pmatrix}$$

The transform matrix $T$ is the product of the scale and offset matrices. Notice that $c_u, c_v$ are simply the mean of the points. To compute the scale $s$ you can first estimate the standard deviation after subtracting the means of the points. Then, $s$ can be defined as the reciprocal of the standard deviation.


Create two matrices $T_a$ and $T_b$ for the set of points defined in the files sceneB-pts2d-1.txt and sceneB-pts2d-2.txt, respectively.

Then, normalize the two sets of points and estimate the fundamental matrix from the normalized points. To obtain a matrix $\hat F$ that applies to the original (unnormalized) coordinates, undo the normalization: if $F_n$ is the matrix estimated from the normalized points, then $\hat F = T_b^T F_n T_a$.

Note: make sure that $\hat F$ is of rank $2$!

(You only need to complete this for sceneB using the points provided to you)

In [13]:
# Complete the bonus. Compute the normalized Fundamental Matrix using the points given in `sceneB-pts2d-1.txt` and `sceneB-pts2d-2.txt` for `sceneB`. 
# Then visualize the epipolar lines obtained with the matrix you got.
####################
## YOUR CODE HERE 
####################
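
For reference, here is a minimal sketch of one possible solution (not the graded answer), reusing the estimateF, rank2F, and load_file helpers defined above; normalizationMatrix and estimateNormalizedF are hypothetical names:

    def normalizationMatrix(pts):
        # T = scale @ offset: first subtract the mean, then scale by 1/std
        c = pts.mean(axis=0)
        s = 1.0 / (pts - c).std()
        offset = np.array([[1, 0, -c[0]], [0, 1, -c[1]], [0, 0, 1]])
        scale = np.array([[s, 0, 0], [0, s, 0], [0, 0, 1]])
        return scale @ offset

    def estimateNormalizedF(pts_a, pts_b):
        T_a = normalizationMatrix(pts_a)
        T_b = normalizationMatrix(pts_b)
        ones = np.ones((pts_a.shape[0], 1))
        # normalize both point sets (T acts on homogeneous coordinates)
        norm_a = (T_a @ np.hstack((pts_a, ones)).T).T[:, :2]
        norm_b = (T_b @ np.hstack((pts_b, ones)).T).T[:, :2]
        # estimate F on the normalized points and enforce rank 2
        F_n = rank2F(estimateF(norm_a, norm_b))
        # undo the normalization so F_hat applies to the original pixel coordinates
        return T_b.T @ F_n @ T_a

    # hypothetical usage:
    # n_pts_a = load_file('input/sceneB/sceneB-pts2d-1.txt')
    # n_pts_b = load_file('input/sceneB/sceneB-pts2d-2.txt')
    # F_hat = estimateNormalizedF(n_pts_a, n_pts_b)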

Part 2: Finding the Correspondences

Up to now, we've only discussed how one can compute the epipolar lines between two images.
In this section, we use a window-based approach for estimating dense stereo correspondence.

For this part, we will be working with the images named corr-img-l.png and corr-img-r.png found in input/correspondences.

Here, you will implement the stereo algorithm: take a window around every pixel in one image and search for the best match in the other image. This concept is discussed in the class lecture (~slides 58-68).

The basic algorithm is defined as follows:

  1. For each epipolar line / horizontal scan line:
     a. For each pixel $p$ on the left line:
        i. Compare a window around $p$ with the same window shifted along the corresponding scan line in the other image.
        ii. Pick the location corresponding to the best-matching window.
To help simplify the problem, we will assume that the two images are aligned horizontally. Therefore, when performing the matching, you only need to scan across the same horizontal scan line of the other image.

When computing the similarities, we will use the sum of squared differences (SSD) measure.
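
For reference, for a window $W$ centered at pixel $(y, x)$ of the left image and a candidate disparity $d$, the SSD score is \begin{equation} SSD(d) = \sum_{(i,j) \in W} \big(L(y+i, x+j) - R(y+i, x+j+d)\big)^2 \end{equation} and the selected disparity is the $d$ minimizing this score.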


Question 1: Implementing SSD for Computing Disparity Map

Implement the SSD matching algorithm. The algorithm returns a disparity image $D$ where \begin{equation} L(y,x) = R(y,x+D(y,x)) \end{equation} when matching from the left image to the right image, denoted by $L$ and $R$, respectively.

Then, repeat this for matching from the right image to the left image. That is, you should output two images:

  1. $D_L = disparity(L,R)$
  2. $D_R = disparity(R,L)$

Then, change the values of:

  1. The window size
  2. The disparity range (i.e., the maximum number of pixels to slide the window over)

How does changing these values affect the results? Please clearly display and label the results in the notebook with the two disparity maps shown side-by-side.

Some notes to consider:

  • When computing the SSD between the two window patches, you may use cv2.matchTemplate with cv2.TM_SQDIFF_NORMED.
  • We provide you with the ground-truth images so you can compare your results. Don't worry if your results are not close to the ground truth.
  • If execution takes a long time, you may want to consider down-sampling the input images. Hint: use cv2.pyrDown() (see the sketch below).
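
A minimal down-sampling sketch (assuming the L and R images loaded in the cell below; one cv2.pyrDown call halves each image dimension):

    # optional speed-up: down-sample both inputs once before matching
    L_small = cv2.pyrDown(L)
    R_small = cv2.pyrDown(R)

Note that disparities shrink with the image, so the disparity range should be scaled down by the same factor.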

When displaying the images, please display them in grayscale!

Hint: To verify that your implementation is correct, think about the values you should obtain closer to the camera. Larger values should be shown in white with lower values shown in black.
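
Recall also that for a rectified stereo pair with focal length $f$ and baseline $B$, depth and disparity are related by $Z = \frac{fB}{d}$; points closer to the camera therefore have larger disparity and should appear brighter in the disparity map.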

In [93]:
L = cv2.imread('input/correspondences/corr-img-l.png', cv2.IMREAD_GRAYSCALE) * (1.0 / 255.0)
R = cv2.imread('input/correspondences/corr-img-r.png', cv2.IMREAD_GRAYSCALE) * (1.0 / 255.0)

After computing the disparity maps, you should display the results side-by-side, which can be done using the following cell as a reference. Please note that when displaying the results, you should clearly specify the parameters used to obtain them!

In [94]:
plt.figure(figsize=(16, 16))
plt.subplot(1, 2, 1)
plt.imshow(L, cmap='gray')
plt.subplot(1, 2, 2)
plt.imshow(R, cmap='gray')
Out[94]:
<matplotlib.image.AxesImage at 0x7faafb73d110>
In [95]:
# Compute the disparity maps for multiple values of window size and maximum disparity
####################
def getWindow(img, window_size):
    # yield (x, y, patch) for every pixel whose full window fits inside the image
    height, width = img.shape
    half_window = int(window_size / 2)
    for y in range(half_window, height - half_window):
        for x in range(half_window, width - half_window):
            yield (x, y, img[y - half_window:y + half_window, x - half_window:x + half_window])


# if direction == 'LR' then img1 is the left image
# if direction == 'RL' then img1 is the right image
def disparity(img1, img2, window_size, disparity_range, match_method, direction):
    height, width = img1.shape
    half_window = int(window_size / 2)
    disparity_map = np.zeros((height, width))
    img1 = img1.astype(np.float32)
    img2 = img2.astype(np.float32)

    # compute the disparity map
    for (x, y, window) in getWindow(img1, window_size):
        # clamp the search range to the image boundaries
        backward = max(x - disparity_range, 0)
        forward = min(x + disparity_range, width)
        # we look for a correspondence only in one direction, according to the image (right or left):
        if direction == 'LR':
            result = cv2.matchTemplate(img2[y - half_window:y + half_window, x:forward], window, method=match_method)
        elif direction == 'RL':
            result = cv2.matchTemplate(img2[y - half_window:y + half_window, backward:x], window, method=match_method)
        _minVal, _maxVal, minLoc, maxLoc = cv2.minMaxLoc(result, None)
        # TM_SQDIFF_NORMED is a distance (take the minimum); the other methods are similarities (take the maximum)
        if match_method == cv2.TM_SQDIFF_NORMED:
            if direction == 'LR':
                disparity_map[y, x] = minLoc[0] + half_window
            else:
                disparity_map[y, x] = x - (minLoc[0] + half_window + backward)
        else:
            if direction == 'LR':
                disparity_map[y, x] = maxLoc[0] + half_window
            else:
                disparity_map[y, x] = x - (maxLoc[0] + half_window + backward)

    # fill the half-window margins by replicating the nearest computed values
    for y in range(half_window, height - half_window):
        for x in range(0, half_window):
            disparity_map[y, x] = disparity_map[y, half_window]
            disparity_map[y, width - 1 - x] = disparity_map[y, width - half_window - 1]

    for y in range(0, half_window):
        for x in range(0, width):
            disparity_map[y, x] = disparity_map[y + half_window, x]
    for y in range(height - half_window, height):
        for x in range(0, width):
            disparity_map[y, x] = disparity_map[y - half_window - 1, x]

    return disparity_map

def showPlot(img1 ,img2):
    plt.figure(figsize=(16, 16))
    plt.subplot(1, 2, 1)
    plt.imshow(img1, cmap='gray')
    plt.subplot(1, 2, 2)
    plt.imshow(img2, cmap='gray')
    plt.show()
####################

Compute disparity maps for 6 combinations of window and disparity range:

In [96]:
#w=9 disparity_range=110
DLR=disparity(L,R,9,110,cv2.TM_SQDIFF_NORMED, 'LR')
DRL=disparity(R,L,9,110,cv2.TM_SQDIFF_NORMED, 'RL')
showPlot(DLR,DRL)
In [47]:
#w=15 disparity_range=110
DLR=disparity(L,R,15,110,cv2.TM_SQDIFF_NORMED, 'LR')
DRL=disparity(R,L,15,110,cv2.TM_SQDIFF_NORMED, 'RL')
showPlot(DLR,DRL)
In [97]:
#w=3 disparity_range=110
DLR=disparity(L,R,3,110,cv2.TM_SQDIFF_NORMED, 'LR')
DRL=disparity(R,L,3,110,cv2.TM_SQDIFF_NORMED, 'RL')
showPlot(DLR,DRL)
In [49]:
#w=3 disparity_range=150
DLR=disparity(L,R,3,150,cv2.TM_SQDIFF_NORMED, 'LR')
DRL=disparity(R,L,3,150,cv2.TM_SQDIFF_NORMED, 'RL')
showPlot(DLR,DRL)
In [98]:
#w=9 disparity_range=150
DLR=disparity(L,R,9,150,cv2.TM_SQDIFF_NORMED, 'LR')
DRL=disparity(R,L,9,150,cv2.TM_SQDIFF_NORMED, 'RL')
showPlot(DLR,DRL)
In [99]:
#w=15 disparity_range=80
DLR=disparity(L,R,15,80,cv2.TM_SQDIFF_NORMED, 'LR')
DRL=disparity(R,L,15,80,cv2.TM_SQDIFF_NORMED, 'RL')
showPlot(DLR,DRL)

Question: How does changing the value of the window size affect the results? How does changing the value of the maximum disparity range affect the results?
Answer: See our answer in PDF file


Question 2: Analyzing the SSD Metric

Above we used SSD to compute the correspondences. However, SSD is not robust to small changes in the image.

Here, we will explore the sensitivity of SSD to small changes in the images. Specifically, repeat Question 1, but with the following changes:

  1. Add Gaussian noise to one or both of the images. Visualize the resulting inputs to make sure the noise is visible. Then, run the matching algorithm with SSD again.
  2. Add contrast to one of the images and re-run the SSD matching algorithm (changing the contrast means multiplying the pixel values by some constant factor).

Display the results of both of the variations. What happened to the results?

In [101]:
# Demonstrate the sensitivity of the SSD matching algorithm as explained above
####################
def gauss_noise(img, mean=0, var=0.01):
    # add zero-mean Gaussian noise; the inputs here are floats in [0, 1], so clip to that range
    noise = np.random.normal(mean, var ** 0.5, img.shape)
    noisy_image = img + noise
    noisy_image_clipped = np.clip(noisy_image, 0.0, 1.0)
    plt.figure(figsize=(16, 16))
    plt.subplot(1, 2, 1)
    plt.imshow(noisy_image_clipped, cmap='gray')
    plt.subplot(1, 2, 2)
    plt.imshow(img, cmap='gray')
    plt.show()
    return noisy_image_clipped

def change_contrast(img, factor):
    # change the contrast by multiplying the pixel values by a constant factor
    contrastImage = img * factor
    return contrastImage
####################

Gaussian noise:

Add noise:

In [102]:
noisy_L=gauss_noise(L)

Compute Disparity Map:

In [103]:
DLR=disparity(noisy_L,R,9,110,cv2.TM_SQDIFF_NORMED,'LR')
DRL=disparity(R,noisy_L,9,110,cv2.TM_SQDIFF_NORMED, 'RL')
showPlot(DLR,DRL)

Contrast:

Change contrast:

In [104]:
# use L2 (the raw 0-255 image) only to visualize the change in contrast with plt.imshow;
# cast to float before scaling so the uint8 values do not wrap around
L2 = cv2.imread('input/correspondences/corr-img-l.png', cv2.IMREAD_GRAYSCALE)
contrast_L = change_contrast(L, 2)
visualizeChange = np.clip(change_contrast(L2.astype(np.float32), 2), 0, 255).astype(np.uint8)
showPlot(visualizeChange, L)

Compute disparity map:

In [105]:
DLR=disparity(contrast_L,R, 9,110,cv2.TM_SQDIFF_NORMED,'LR')
DRL=disparity(R,contrast_L, 9,110,cv2.TM_SQDIFF_NORMED,'RL')
showPlot(DLR,DRL)

Question: Why is SSD sensitive to Gaussian noise? Why is it sensitive to changes in contrast?
Answer: See our answer in PDF file


Question 3: Implementing Normalized Cross Correlation (NCC) for Computing Disparity Map

We have now seen that the SSD metric is not robust to small changes between the input images. Therefore, in this section we will compute the correspondences using the Normalized Cross Correlation (NCC) metric introduced in class.

Repeat Questions 1 and 2 where now, NCC is used for computing the similarity between two window patches. As before, you may use cv2.matchTemplate for computing the similarity.

Again, explore different values for the window size and maximum disparity.

Think to yourself: how did the NCC measure affect the results when adding Gaussian noise? How about when changing the contrast? Do your results make sense?
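
For reference, the cells below pass cv2.TM_CCORR_NORMED as the matching method; for two windows $W_1, W_2$ its score is \begin{equation} NCC = \frac{\sum_{i,j} W_1(i,j)\, W_2(i,j)}{\sqrt{\sum_{i,j} W_1(i,j)^2 \cdot \sum_{i,j} W_2(i,j)^2}} \end{equation} which is unchanged when either window is multiplied by a positive constant. Since this is a similarity rather than a distance, the best match is the location with the highest score (the maxLoc branch in disparity()).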

In [ ]:
# Repeat Questions 1 and 2 using the NCC metric for computing similarities
####################
# We reuse the disparity() function from Part 2, Question 1, passing
# cv2.TM_CCORR_NORMED as the matching method (see the cells below).
####################

Repeat Question 1

Compute disparity maps for 6 combinations of window and disparity range, using the NCC metric:

In [106]:
#w=9 disparity_range=110
DLR=disparity(L,R,9,110,cv2.TM_CCORR_NORMED, 'LR')
DRL=disparity(R,L,9,110,cv2.TM_CCORR_NORMED, 'RL')
showPlot(DLR,DRL)
In [107]:
#w=15 disparity_range=110
DLR=disparity(L,R,15,110,cv2.TM_CCORR_NORMED, 'LR')
DRL=disparity(R,L,15,110,cv2.TM_CCORR_NORMED, 'RL')
showPlot(DLR,DRL)
In [108]:
#w=3 disparity_range=110
DLR=disparity(L,R,3,110,cv2.TM_CCORR_NORMED, 'LR')
DRL=disparity(R,L,3,110,cv2.TM_CCORR_NORMED, 'RL')
showPlot(DLR,DRL)
In [60]:
#w=3 disparity_range=150
DLR=disparity(L,R,3,150,cv2.TM_CCORR_NORMED, 'LR')
DRL=disparity(R,L,3,150,cv2.TM_CCORR_NORMED, 'RL')
showPlot(DLR,DRL)
In [69]:
#w=9 disparity_range=150
DLR=disparity(L,R,9,150,cv2.TM_CCORR_NORMED, 'LR')
DRL=disparity(R,L,9,150,cv2.TM_CCORR_NORMED, 'RL')
showPlot(DLR,DRL)
In [109]:
#w=15 disparity_range=80
DLR=disparity(L,R,15,80,cv2.TM_CCORR_NORMED, 'LR')
DRL=disparity(R,L,15,80,cv2.TM_CCORR_NORMED, 'RL')
showPlot(DLR,DRL)

Repeat Question 2

Gaussian noise:

In [110]:
DLR=disparity(noisy_L,R,9,110,cv2.TM_CCORR_NORMED,'LR')
DRL=disparity(R,noisy_L,9,110,cv2.TM_CCORR_NORMED, 'RL')
showPlot(DLR,DRL)

Contrast:

In [111]:
DLR=disparity(contrast_L,R, 9,110,cv2.TM_CCORR_NORMED,'LR')
DRL=disparity(R,contrast_L, 9,110,cv2.TM_CCORR_NORMED,'RL')
showPlot(DLR,DRL)